feat: Add file based Hugging Face datasets to kedro-datasets #1373

iwhalen wants to merge 21 commits

Conversation
deepyaman left a comment
High-level concerns:
- Use of fsspec.
- Bespoke split/partition/multi-file handling (whatever you want to call it).
- Handling too many types in one.
I'd personally rather this be a lightweight wrapper that delegates more to the underlying Hugging Face APIs.
If you disagree, it's probably worth getting a second opinion/review on the above.
```python
if protocol == "file":
    _fs_args.setdefault("auto_mkdir", True)

self._fs = fsspec.filesystem(self._protocol, **_credentials, **_fs_args)
```
The problem with this approach is that Hugging Face also supports remote URIs natively (e.g. https://huggingface.co/docs/datasets/en/package_reference/loading_methods#datasets.load_from_disk). I think we don't want to use fsspec in those cases, because Hugging Face very well could have a more efficient native path.
This is a hard problem to solve; we have similar concerns on the Ibis side (e.g. #1298), but intuitively I'd err on the side of leaving it up to the Hugging Face APIs.
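For reference, a minimal sketch of fully delegating to Hugging Face (the URI and storage_options are illustrative):

```python
from datasets import load_from_disk

# Hugging Face resolves the remote URI itself, so the Kedro dataset wouldn't
# need to construct an fsspec filesystem up front.
ds = load_from_disk(
    "s3://my-bucket/my-dataset",  # illustrative bucket/path
    storage_options={"anon": False},
)
```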
See comment below on needing a filesystem.
In order to save a DatasetDict we have to be able to access it.
Hugging Face uses fsspec under the hood anyway 🤷
```python
if self._fs.isdir(load_path):
    paths = {
        PurePosixPath(p).stem: p for p in self._fs.glob(f"{load_path}/*{ext}")
```
Does it have to have the extension, or is this too narrow? Will people come saying they have HDF5 files with .hdf5 extension instead of .h5?
Actually, look at the C4 example under https://huggingface.co/docs/datasets/en/loading#hugging-face-hub; there's an example with .json.gz extension.
Hmmm... maybe I'll just glob everything in the directory then?
```python
    }
return DatasetDict(
    {
        split: loader(path, **self._load_args)
```
It seems you can specify splits in the data_files argument of load_dataset (e.g. https://huggingface.co/docs/datasets/loading#json); would this be preferable to constructing the DatasetDict with multiple Dataset.from_* calls? I haven't looked into the implementation, but I'd be curious if there's a good reason to not use the higher-level API.
Referencing the same C4 example from above (https://huggingface.co/docs/datasets/en/loading#hugging-face-hub), it seems you can also pass the wildcard directly under data_files.
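For concreteness, both patterns look roughly like this (file names are illustrative):

```python
from datasets import load_dataset

# Splits declared directly in data_files; load_dataset returns a DatasetDict
# keyed by these split names.
dsets = load_dataset(
    "json",
    data_files={"train": "train.jsonl", "test": "test.jsonl"},
)

# Wildcards are also accepted, as in the C4 example linked above.
c4_style = load_dataset("json", data_files="data/*.json.gz")
```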
There are two things happening here:
- Loading a dataset from the Hub (handled by HFDataset now; it's not perfect, but I'm ignoring it in this PR).
- Loading/saving from a filesystem (what I'm trying to implement here).
Loading

load_dataset can handle all our needed types with a call like:

```python
load_dataset("csv", data_files="path/to/data.csv")
```

Same goes for parquet, lance, hdf5, json.
Arrow datasets don't play nice like this though and use load_from_disk("path/to/arrow").
Saving

This procedure works for everything but Arrow:

```python
data = Dataset.from_dict({"a": [1, 2], "b": [3, 4]})
data.to_parquet("data.parquet")
load_dataset("parquet", data_files="data.parquet")
```

I couldn't find a way to save a DatasetDict to a particular format.
This loop is what happens in the DatasetDict itself here, which eventually makes it down to this private method that will only save to Arrow format.
The same goes for the individual Dataset objects: you have to use the to_<format> methods; there's no single method to rule them all like there is with loading. Lance and HDF5 don't actually have save methods handled by Hugging Face :(
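To make the per-split loop concrete, a minimal sketch (parquet stands in for any format with a to_<format> method; filenames are illustrative):

```python
from datasets import DatasetDict

def save_dataset_dict_as_parquet(dd: DatasetDict, save_path: str) -> None:
    # DatasetDict itself only offers save_to_disk (Arrow), so each split is
    # saved individually via the Dataset-level writer.
    for split, dataset in dd.items():
        dataset.to_parquet(f"{save_path}/{split}.parquet")
```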
Agree with your point on file name format strictness though.
```python
    glob_function=self._fs.glob,
)

def _load(self) -> DatasetLike:
```
How do you get an IterableDataset, for example? At the bottom of https://huggingface.co/docs/datasets/en/filesystems, it seems like you could do IterableDataset.from_dict(), but it's not clear you're ever doing that, right?
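For context, the standard route to an IterableDataset from local files is streaming (a sketch; the file name is illustrative):

```python
from datasets import load_dataset

# streaming=True returns an IterableDataset (or an IterableDatasetDict when
# split= is omitted) instead of materialising everything up front.
it_ds = load_dataset("csv", data_files="data.csv", streaming=True, split="train")
```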
Thanks for the review @deepyaman! This was a little sloppy, you're right. My first thought was also to try to get the Hugging Face API to handle everything. I'll try again on this and update. Otherwise, I'll just make a separate dataset for each format HF supports. Thanks again!
Force-pushed from 130d51c to 28c80df
Ok, changes are all up. I think this is a bit better.

Unchanged:
Thanks for the contribution and for being so responsive to feedback! The split into separate dataset classes makes sense, and it matches how the rest of kedro-datasets is organized (e.g. pandas.CSVDataset, pandas.ParquetDataset). Also, Arrow genuinely behaves differently from the other formats.
A few high-level suggestions before we iterate on the details:
Scope this PR to the four round-trip formats. I'd suggest removing HDF5Dataset and LanceDataset (and their tests/docs) from this PR. Focus on Arrow, Parquet, CSV, and JSON — these all support full save + load round-trips and share the same base class, so they belong together. The read-only formats are a separate concern that deserves a small follow-up PR where we can get the save() error handling right (e.g. overriding save() directly so the error is raised immediately, rather than letting the base class do type checking, iterable materialization, and path resolution before the error surfaces in _save_dataset).
Deduplicate the tests. The test files for CSV, JSON, and Parquet are near-identical (differing only in class name and extension). Consider a parametrized shared test instead, e.g. the sketch below.
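A hedged sketch of what that could look like (the imports assume this PR's class names, and the save/load call pattern is an assumption, not verified against the branch):

```python
import pytest
from datasets import Dataset

# Assumption: these are the classes this PR introduces.
from kedro_datasets.huggingface import CSVDataset, JSONDataset, ParquetDataset


@pytest.fixture
def dummy_dataset() -> Dataset:
    return Dataset.from_dict({"a": [1, 2], "b": [3, 4]})


@pytest.mark.parametrize(
    "dataset_cls,extension",
    [(CSVDataset, ".csv"), (JSONDataset, ".json"), (ParquetDataset, ".parquet")],
)
def test_round_trip(dataset_cls, extension, tmp_path, dummy_dataset):
    # One shared, parametrized body replaces three near-identical test files.
    ds = dataset_cls(path=str(tmp_path / f"data{extension}"))
    ds.save(dummy_dataset)
    assert ds.load().to_dict() == dummy_dataset.to_dict()
```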
Force-pushed from 0b4bb94 to 212769f
@ElenaKhaustova Ok! I think I addressed most of the changes. There's still a couple of things I'm not in love with. I see my tests are failing, so I'll get to those. Just wanted to send a note on some higher-level things.

Checking for directory in non-Arrow datasets

In the non-Arrow datasets, I thought it was reasonable that, if the provided path is a directory, the user has to tell us what we're looking for in the directory; if they don't, we throw an error. In other words, we can't call load_dataset on a bare directory without knowing which files it should read. I'm not sure if there's a smarter way to do this. Let me know what you think.
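Roughly what that check amounts to (a sketch with illustrative names, not the branch's exact code):

```python
from kedro.io.core import DatasetError

def _check_directory_requires_data_files(self, load_path: str) -> None:
    # A bare directory is ambiguous: we can't know which files map to which
    # split, so explicit data_files become mandatory before calling load_dataset.
    if self._fs.isdir(load_path) and not self._load_args.get("data_files"):
        raise DatasetError(
            f"'{load_path}' is a directory; specify 'data_files' so the "
            "dataset knows which files to load."
        )
```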
ElenaKhaustova left a comment
Thanks for the updates @iwhalen — this is much improved!
On your two questions:
Directory loading with data_files: You're right — my earlier suggestion to use data_dir was too optimistic about what HF handles automatically for non-Arrow formats. Your approach of requiring explicit data_files is the correct middle ground: it removes the fragile glob-based discovery while still delegating the actual loading to load_dataset.
_build_data_files helper — this is a good UX call.
A few remaining items in inline comments below.
Force-pushed from 5910c0a to 8dbdc32
@ElenaKhaustova seems like we're getting closer! I addressed the changes you had above and fixed my doctest issues. Now I'm seeing some failures in CI that seem to be unrelated to the changes in this branch. Any advice?
ElenaKhaustova left a comment
Thanks @iwhalen, this looks great now! All the feedback from the previous round has been addressed cleanly:
- __init__ is now free of filesystem calls and validation logic, matching the convention used elsewhere in kedro-datasets
- _save_dataset_dict now validates that data_files keys match the DatasetDict split names — much better than a silent "file not found" at load time
- RuntimeError → DatasetError for consistency
- DatasetLike is now defined once in _base.py
- The Arrow docstring (and the others) correctly describe the iterable behavior now
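For reference, that split-name validation boils down to something like this (a sketch, not the branch's exact code):

```python
from kedro.io.core import DatasetError

def validate_split_names(data_files: dict[str, str], split_names: set[str]) -> None:
    # Fail fast at save time rather than surfacing a confusing
    # "file not found" at the next load.
    if set(data_files) != split_names:
        raise DatasetError(
            f"data_files keys {sorted(data_files)} must match "
            f"DatasetDict splits {sorted(split_names)}"
        )
```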
The CI failure is actually from this PR — coverage dropped to 99.88% and the project requires 100%. The uncovered lines are:
- _base.py:199-200 — the except DatasetError: return False branch in _exists()
- arrow_dataset.py:78-79 — the same branch in Arrow's _exists() override
This branch fires when _get_load_path() can't resolve a path — i.e. a versioned dataset with no saved versions yet. Adding a test like this to both test_filesystem_datasets.py and test_arrow_dataset.py should cover it:
```python
def test_exists_no_versions(self, kedro_dataset_cls, path_file):
    """`exists()` returns False (not raises) when no versions are saved yet."""
    ds = kedro_dataset_cls(path=path_file, version=Version(None, None))
    assert ds.exists() is False
```
@ElenaKhaustova Fixed some more issues, including a weird one with chromadb dependencies on Python 3.14. It seems like this was known: chroma-core/chroma#2571 (failure was here).
@iwhalen, thank you for addressing the remaining comments! It looks like the failure isn't from your HF changes. The only failing job is the Opik test suite, which fails specifically on Windows + Python 3.14 (it passes on every other matrix entry). That's a pre-existing issue. Could you rebase fresh on main?
The failures on experimental datasets will not block merging PRs btw, so no worries about that.
Force-pushed from f842e49 to 8beeb3d
@ElenaKhaustova Opik changes removed! Not sure how they got in there... Thanks again for all the help.
```python
save_path = get_filepath_str(self._get_save_path(), self._protocol)

if isinstance(data, DatasetDict):
    self._save_dataset_dict(data, save_path)
```
QQ: Do you need a data_files dict here to specify filepaths?
Sorry I see below it's being read from _load_args. Should this be read from save_args instead?
This is a really good point actually... it should be read from save_args.
However, the way it's implemented right now would require a user to specify data_files in both the load and save args.
Concretely, it would have to look like this in YAML:
```yaml
reviews:
  type: huggingface.CSVDataset
  path: data/01_raw/reviews
  load_args:
    data_files:
      labels: labels.csv
      data: data.csv
  save_args:
    data_files:
      labels: labels.csv
      data: data.csv
```

Which isn't the most user-friendly... hmm...
I guess there are two options: swap the saving operation to read from save_args and require data_files to be specified twice, or allow data_files as a top-level argument to the dataset.
In this second option,
```yaml
reviews:
  type: huggingface.CSVDataset
  path: data/01_raw/reviews
  data_files:
    labels: labels.csv
    data: data.csv
```

would work, and so would specifying data_files in both the save and load args, but something like:
```yaml
reviews:
  type: huggingface.CSVDataset
  path: data/01_raw/reviews
  data_files:
    labels: labels.csv
    data: data.csv
  save_args:
    data_files:
      labels: labels.csv
      data: data.csv
```

would throw an error (regardless of whether the filenames match).
What do you think @ankatiyar? Or maybe there's another option you'd prefer.
If this information is used for both saving and loading, I don't mind if it's a top-level key. You're right that having to specify it for both load and save args might be tedious, but maybe templating (with OmegaConfigLoader) could also help for that option, and this approach can be documented:
```yaml
_data_files:
  labels: labels.csv
  data: data.csv

reviews:
  type: huggingface.CSVDataset
  path: data/01_raw/reviews
  load_args:
    data_files: ${_data_files}
  save_args:
    data_files: ${_data_files}
```
I'm okay with either approach, just using load_args for saving stuff feels not quite right. WDYT? @ElenaKhaustova
+1 to @ankatiyar that using load_args for save behaviour feels off. I'd go with Option B (top-level data_files) rather than duplicating it across load_args/save_args:
- data_files describes the directory layout, which both load and save need to agree on — same shape as path, which is also top-level.
- Single source of truth, no risk of load_args and save_args drifting apart.
- Surfaces it in the constructor signature instead of hiding it inside an opaque load_args dict.
- OmegaConf templating works, but relies on users knowing the trick and still produces visually duplicated YAML.
While you're in this code, there's also a related bug worth fixing: _save_dataset_dict currently ignores the filename values in data_files on save — it synthesises f"{save_path}/{split}{ext}" regardless. So:

```yaml
load_args:
  data_files:
    labels: my_labels.csv  # silently ignored on save
    data: my_data.csv
```

…will save to labels.csv / data.csv but then load looks for my_labels.csv / my_data.csv → round-trip fails. The current validation only checks keys, not values, and the test fixtures all use {split}{ext} filenames so this never gets exercised.
So my suggestion:
- Promote data_files to a top-level constructor arg.
- Raise a clear error if it also appears in load_args or save_args.
- Make the save loop use f"{save_path}/{data_files[split]}" so filenames round-trip (see the sketch below).
- Add a test with a custom filename (e.g. {"train": "my_train.csv"}) to lock the behaviour in.
Great! Agreed. I'll do this.
Force-pushed from 4bf2680 to ce98b10
Force-pushed from ce98b10 to adbf9a4
@ElenaKhaustova @ankatiyar thanks again for all the feedback! There's a seemingly unrelated segfault happening now, but the changes are up. The only place I had to make an arbitrary decision was the case where data_files is specified both at the top level and inside load_args/save_args. It's a weird edge case, but I think we want people to use the top-level data_files.
deepyaman left a comment
I just took a quick pass, but LGTM overall; one nit comment, and then I agree with some feedback you're already going to address.
```python
# For documentation builds that might fail due to dependency issues
# https://github.com/pylint-dev/pylint/issues/4300#issuecomment-1043601901
```
I know this is dumb, but can we include this in each block, in the spirit of blind consistency with how the other datasets are? 😅
| """ | ||
|
|
||
| BUILDER: ClassVar[str] = "parquet" | ||
| EXTENSION: ClassVar[str] = ".parquet" |
Just to confirm, this extension is only used if you don't specify save_data_files? Does this also work for loading e.g. .parquet.snappy or some other sorts of extensions?
I think so, but want to be sure.
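If it helps confirm, here's the behaviour as I understand it (a sketch; the file name is illustrative, and the claim about EXTENSION is an assumption, not verified against this PR's code):

```python
from datasets import load_dataset

# Because the builder ("parquet") is named explicitly, load_dataset doesn't
# have to infer the format from the extension, so .parquet.snappy loads fine;
# EXTENSION would then only drive default filenames on save.
ds = load_dataset("parquet", data_files="data.parquet.snappy", split="train")
```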
Description
Adds datasets for interacting with Hugging Face datasets on a file system.
Development notes
Added docs, tests, ran in a fresh pipeline.
Iterable and in-memory versions have both been tested as well.
Note
I couldn't figure out a good way to save an IterableDataset without looping through it entirely first. Maybe there's a better way someone knows about.
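For reference, the materialise-then-save workaround amounts to something like this (a sketch; it necessarily iterates the stream once):

```python
from datasets import Dataset, IterableDataset

def materialise(stream: IterableDataset) -> Dataset:
    # There's no direct IterableDataset -> file writer, so every example is
    # pulled through the iterator once and rebuilt as an Arrow-backed Dataset,
    # after which the usual to_<format> methods apply.
    return Dataset.from_generator(lambda: (example for example in stream))
```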
Updated jsonschema/kedro-catalog.1.00.json.

Developer Certificate of Origin
We need all contributions to comply with the Developer Certificate of Origin (DCO). All commits must be signed off by including a Signed-off-by line in the commit message. See our wiki for guidance.

If your PR is blocked due to unsigned commits, then you must follow the instructions under "Rebase the branch" on the GitHub Checks page for your PR. This will retroactively add the sign-off to all unsigned commits and allow the DCO check to pass.
Checklist
- Updated jsonschema/kedro-catalog-X.XX.json if necessary
- Updated the RELEASE.md file